
LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM rediscovers knowledge from scratch on every question; nothing accumulates. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
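To make the contrast concrete, here is a minimal sketch of that stateless retrieve-then-generate loop. The retriever is a toy word-overlap scorer, and the prompt is returned instead of being sent to a model — both are illustrative stand-ins, not how any particular RAG product works:

```python
def retrieve(query, documents, k=2):
    """Toy retriever: score each document by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query, documents):
    """Stateless RAG: every question restarts retrieval from scratch."""
    chunks = retrieve(query, documents)
    # In a real system this prompt would be sent to the LLM.
    return "Context:\n" + "\n".join(chunks) + f"\n\nQuestion: {query}"

docs = [
    "Paris is the capital of France.",
    "The Nile is a river in Africa.",
]
print(retrieve("capital of France", docs, k=1))
# ['Paris is the capital of France.']
```

Note what is missing: nothing is written back. The next question runs `retrieve` again over the same raw documents, which is exactly the lack of accumulation the wiki pattern is meant to fix.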

@selcukcihan
selcukcihan / AGENTS.md
Created April 15, 2026 09:06
agents.md file for your beloved coding agents

Project Rules

These instructions apply to every task performed in this repository.

Purpose

  • Treat this file as the project-wide source of truth for repository-specific working rules.
  • Read and follow these instructions before making changes in this repo.

Working Style

  • Prefer minimal, targeted changes over broad refactors.
@stonar96
stonar96 / Anti-Xray.md
Last active April 16, 2026 12:57
Recommended Paper Anti-Xray settings by stonar96

❗ This has been moved to the official PaperMC docs ❗

Link: https://docs.papermc.io/paper/anti-xray

Help: https://discord.gg/papermc


General

Anti-Xray can be configured per world in the paper.yml configuration file. To understand how per-world configuration works, please read this first. Note that after changing any Anti-Xray settings you have to restart your server; executing the /reload command (which you should never do) won't apply the settings to worlds that are already loaded.
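For orientation, a per-world layout in paper.yml looks roughly like the sketch below. The key names shown (`enabled`, `engine-mode`) are assumptions based on older Paper versions — the linked PaperMC docs are the authoritative reference:

```yaml
# paper.yml (sketch; verify key names against the PaperMC docs)
world-settings:
  default:            # applies to every world unless overridden
    anti-xray:
      enabled: true
      engine-mode: 1  # 1: hide configured ores; 2: replace with fake blocks
  world_nether:       # per-world override
    anti-xray:
      enabled: false
```

Remember that, per the note above, changes here only take effect after a full server restart.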

@imaurer
imaurer / Unsloth Gemma 4 MOE Setup Guide.md
Created April 14, 2026 23:26
This guide covers running **Gemma 4 26B MOE** (Mixture of Experts) locally on an Apple Silicon Mac using llama.cpp with memory-mapped files. The MOE architecture activates only 8 of 128 experts per token, making it incredibly efficient.

Unsloth Gemma 4 MOE Setup Guide

Running a 26B-parameter model with only 6 GB of RAM using mmap

Overview

This guide covers running Gemma 4 26B MOE (Mixture of Experts) locally on an Apple Silicon Mac using llama.cpp with memory-mapped files. The MOE architecture activates only 8 of 128 experts per token, so only a small fraction of the weights is touched at any moment — which is what makes the mmap approach practical.
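As a rough back-of-the-envelope: the 8-of-128 ratio is from the guide, but the simplifying assumption that expert weights dominate the parameter count (ignoring shared layers such as attention and embeddings) is mine:

```python
total_params = 26e9   # total parameters in the model
experts_total = 128
experts_active = 8    # experts routed to per token

# Assuming expert weights dominate the total, the fraction of
# weights touched per token is roughly experts_active / experts_total.
active_fraction = experts_active / experts_total
active_params = total_params * active_fraction
print(f"~{active_params / 1e9:.2f}B parameters active per token")
```

Only those active expert pages need to be resident, which is why a memory-mapped 26B model can run in a few gigabytes of RAM: inactive experts stay on disk until the router selects them.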

Spec Value
@farzaa
farzaa / wiki-gen-skill.md
Last active April 16, 2026 12:55
personal_wiki_skill.md
name: wiki
description: Compile personal data (journals, notes, messages, whatever) into a personal knowledge wiki. Ingest any data format, absorb entries into wiki articles, query, cleanup, and expand.
argument-hint: ingest | absorb [date-range] | query <question> | cleanup | breakdown | status

Personal Knowledge Wiki

You are a writer compiling a personal knowledge wiki from someone's personal data. Not a filing clerk. A writer. Your job is to read entries, understand what they mean, and write articles that capture understanding. The wiki is a map of a mind.